PCA

Introduction quizzes
(not all answered, as some are very straightforward)

Principal Axis of New Coordinate System

In [1]:
from IPython.display import YouTubeVideo
YouTubeVideo('i6zv8vyZBk0')
Out[1]:

Thrun talks about vector components in the question. Essentially, for each axis of the new coordinate system, we ask how far we move in the x and y directions of the old coordinate system to point along it.

For example, to trace the ${x}'$ vector in the old coordinate system, one moves 1 unit in $\Delta x$ and then, to reach the vector's end point, goes up the same 1 unit in $\Delta y$. So $(\Delta x, \Delta y) = (1, 1)$ is the answer.

Second Principal Component of New System

In [2]:
from IPython.display import YouTubeVideo
YouTubeVideo('cTjBlM2ATLQ')
Out[2]:

Similarly, for the $y'$ vector, one has to move 1 unit left in $\Delta x$ (the $-x$ direction) and 1 unit up in $\Delta y$. So $(\Delta x, \Delta y)$ would be $(-1, 1)$ for the $y'$ vector (before normalizing it into a unit vector of the new coordinate system).

For more info, you could refer here.

Also, in the solution he talks about normalization. Basically, our new vectors $x'$ and $y'$ currently have length, by the Pythagorean theorem, $$\sqrt{1^2 + 1^2} = \sqrt{2}$$

2018-06-23_23h20_33.png

Now, for $x'$ and $y'$ to be unit vectors of our new coordinate system, they need to have length 1, so we just divide each component by $\sqrt{2}$.

2018-06-23_23h23_03.png

Note: Sebastian makes a mistake in the video; it's not just $\sqrt{2}$, it's $\displaystyle \frac{1}{\sqrt{2}}$, as also pointed out by Udacity later.
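
A quick NumPy check of my own (not from the lesson) that dividing $(1, 1)$ and $(-1, 1)$ by $\sqrt{2}$ really does give unit vectors:

import numpy as np

x_prime = np.array([1.0, 1.0])    # (dx, dy) for x' in the old coordinates
y_prime = np.array([-1.0, 1.0])   # (dx, dy) for y' in the old coordinates

x_hat = x_prime / np.linalg.norm(x_prime)  # components become 1/sqrt(2) each
y_hat = y_prime / np.linalg.norm(y_prime)  # components become -1/sqrt(2) and 1/sqrt(2)

print(x_hat)
print(y_hat)
print(np.linalg.norm(x_hat))  # 1.0, so it is now a unit vector
print(np.linalg.norm(y_hat))  # 1.0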

Practice Finding Centers

In [3]:
from IPython.display import YouTubeVideo
YouTubeVideo('PRjmvj6Vubs')
Out[3]:

It's (3, 3), as easily seen in the graph.

Practice Finding New Axes

In [4]:
from IPython.display import YouTubeVideo
YouTubeVideo('th34aboBOO0')
Out[4]:

Let us check each one separately.

2018-06-24_15h16_30.png

For $x'$, one has to move down by 1 unit in $\Delta y$, i.e. $\Delta y = -1$ (also given as a hint on the right side of the video), and then right by 2 units in $\Delta x$. So $\Delta x$ for $x'$ would be 2.

For $y'$, one has to move up by 2 units in $\Delta y$ and then right by 1 unit in $\Delta x$. So $\Delta y$ for $y'$ would be 2.
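
The same kind of quick NumPy check for this quiz (again my own, not part of the lesson code): normalize the directions $(2, -1)$ for $x'$ and $(1, 2)$ for $y'$, and confirm the two new axes are perpendicular:

import numpy as np

x_prime = np.array([2.0, -1.0])  # (dx, dy) read off for x'
y_prime = np.array([1.0, 2.0])   # (dx, dy) read off for y'

x_hat = x_prime / np.linalg.norm(x_prime)  # unit vector along x'
y_hat = y_prime / np.linalg.norm(y_prime)  # unit vector along y'

print(np.dot(x_hat, y_hat))  # 0.0, so x' and y' are orthogonal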

As usual, many of the further quizzes on PCA are trivial and already explained sufficiently, so I am going to skip them.

Some slides that, in my view, capture the essence:

Advantages of Maximal Variance

In [5]:
from IPython.display import YouTubeVideo
YouTubeVideo('jQaYAlZ1fp0')
Out[5]:

Maximal Variance and Information Loss

In [6]:
from IPython.display import YouTubeVideo
YouTubeVideo('hfmvk8DzTGA')
Out[6]:

Note: the green one has more information loss, because the "height" information, from the green data point down to the projection line, is lost.

Maximal Variance minimizes Information Loss

In [7]:
from IPython.display import YouTubeVideo
YouTubeVideo('LTPV8lxQeZQ')
Out[7]:
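
To convince myself of this, here is a small NumPy sketch (my own illustration, not from the lesson): for a toy 2D cloud, the variance captured along a unit direction plus the mean squared perpendicular distance (the information loss) always adds up to the same total, so the direction with maximal variance automatically has minimal loss.

import numpy as np

rng = np.random.RandomState(0)
# toy 2D cloud, stretched and then rotated so its long axis points along 45 degrees
X = rng.randn(500, 2).dot(np.diag([3.0, 0.5]))
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = X.dot(R.T)
X = X - X.mean(axis=0)  # center the data

def variance_and_loss(X, angle):
    d = np.array([np.cos(angle), np.sin(angle)])  # unit direction to project onto
    proj = X.dot(d)                               # coordinate of each point along d
    residual = X - np.outer(proj, d)              # perpendicular part that gets thrown away
    return proj.var(), (residual ** 2).sum(axis=1).mean()

for angle in [0.0, np.pi / 4, np.pi / 2]:
    var, loss = variance_and_loss(X, angle)
    # the total stays the same; the 45-degree direction has max variance and min loss
    print("angle %.2f rad: variance %.2f, info loss %.2f, total %.2f" % (angle, var, loss, var + loss))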

PCA for Feature Transformation
PCA automatically gives us a new set of features (the principal components, in reduced dimensionality) in order of their 'power', i.e. how much of the variance each one explains.

In [8]:
from IPython.display import YouTubeVideo
YouTubeVideo('8kUPRUEMCA8')
Out[8]:
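
A minimal sketch of that ordering on a toy dataset (my own example, assuming sklearn is installed): PCA's components_ come back sorted by explained_variance_ratio_, most 'powerful' first.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(42)
x = rng.randn(200)
# two correlated features: the second is roughly the first plus a little noise
data = np.column_stack([x, x + 0.3 * rng.randn(200)])

pca = PCA(n_components=2)
transformed = pca.fit_transform(data)   # data expressed in the new (PC) coordinates

print(pca.components_)                  # rows are the new axes, strongest first
print(pca.explained_variance_ratio_)    # sorted in decreasing order, roughly [0.98, 0.02] here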

Review

In [9]:
from IPython.display import YouTubeVideo
YouTubeVideo('oFBGXUUuKyI')
Out[9]:

PCA Mini Project

Let us first ensure the code below works as given by Udacity (I had to conda install pillow, though).

In [10]:
#eigenfaces.py
%matplotlib inline

"""
===================================================
Faces recognition example using eigenfaces and SVMs
===================================================

The dataset used in this example is a preprocessed excerpt of the
"Labeled Faces in the Wild", aka LFW_:

  http://vis-www.cs.umass.edu/lfw/lfw-funneled.tgz (233MB)

  .. _LFW: http://vis-www.cs.umass.edu/lfw/

  original source: http://scikit-learn.org/stable/auto_examples/applications/face_recognition.html

"""
print __doc__

from time import time
import logging
import pylab as pl
import numpy as np

from sklearn.cross_validation import train_test_split
from sklearn.datasets import fetch_lfw_people
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.decomposition import RandomizedPCA
from sklearn.svm import SVC

# Display progress logs on stdout
logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')


###############################################################################
# Download the data, if not already on disk and load it as numpy arrays
lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)

# introspect the images arrays to find the shapes (for plotting)
n_samples, h, w = lfw_people.images.shape
np.random.seed(42)

# for machine learning we use the data directly (as relative pixel
# position info is ignored by this model)
X = lfw_people.data
n_features = X.shape[1]

# the label to predict is the id of the person
y = lfw_people.target
target_names = lfw_people.target_names
n_classes = target_names.shape[0]

print "Total dataset size:"
print "n_samples: %d" % n_samples
print "n_features: %d" % n_features
print "n_classes: %d" % n_classes


###############################################################################
# Split into a training and testing set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

###############################################################################
# Compute a PCA (eigenfaces) on the face dataset (treated as unlabeled
# dataset): unsupervised feature extraction / dimensionality reduction
n_components = 150

print "Extracting the top %d eigenfaces from %d faces" % (n_components, X_train.shape[0])
t0 = time()
pca = RandomizedPCA(n_components=n_components, whiten=True).fit(X_train)
print "done in %0.3fs" % (time() - t0)

eigenfaces = pca.components_.reshape((n_components, h, w))

print "Projecting the input data on the eigenfaces orthonormal basis"
t0 = time()
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print "done in %0.3fs" % (time() - t0)


###############################################################################
# Train a SVM classification model

print "Fitting the classifier to the training set"
t0 = time()
param_grid = {
    'C': [1e3, 5e3, 1e4, 5e4, 1e5],
    'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1],
}
# for sklearn version 0.16 or prior, the class_weight parameter value is 'auto'
clf = GridSearchCV(SVC(kernel='rbf', class_weight='balanced'), param_grid)
clf = clf.fit(X_train_pca, y_train)
print "done in %0.3fs" % (time() - t0)
print "Best estimator found by grid search:"
print clf.best_estimator_


###############################################################################
# Quantitative evaluation of the model quality on the test set

print "Predicting the people names on the testing set"
t0 = time()
y_pred = clf.predict(X_test_pca)
print "done in %0.3fs" % (time() - t0)

print classification_report(y_test, y_pred, target_names=target_names)
print confusion_matrix(y_test, y_pred, labels=range(n_classes))


###############################################################################
# Qualitative evaluation of the predictions using matplotlib

def plot_gallery(images, titles, h, w, n_row=3, n_col=4):
    """Helper function to plot a gallery of portraits"""
    pl.figure(figsize=(1.8 * n_col, 2.4 * n_row))
    pl.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35)
    for i in range(n_row * n_col):
        pl.subplot(n_row, n_col, i + 1)
        pl.imshow(images[i].reshape((h, w)), cmap=pl.cm.gray)
        pl.title(titles[i], size=12)
        pl.xticks(())
        pl.yticks(())


# plot the result of the prediction on a portion of the test set

def title(y_pred, y_test, target_names, i):
    pred_name = target_names[y_pred[i]].rsplit(' ', 1)[-1]
    true_name = target_names[y_test[i]].rsplit(' ', 1)[-1]
    return 'predicted: %s\ntrue:      %s' % (pred_name, true_name)

prediction_titles = [title(y_pred, y_test, target_names, i)
                         for i in range(y_pred.shape[0])]

plot_gallery(X_test, prediction_titles, h, w)

# plot the gallery of the most significative eigenfaces

eigenface_titles = ["eigenface %d" % i for i in range(eigenfaces.shape[0])]
plot_gallery(eigenfaces, eigenface_titles, h, w)

pl.show()
===================================================
Faces recognition example using eigenfaces and SVMs
===================================================

The dataset used in this example is a preprocessed excerpt of the
"Labeled Faces in the Wild", aka LFW_:

  http://vis-www.cs.umass.edu/lfw/lfw-funneled.tgz (233MB)

  .. _LFW: http://vis-www.cs.umass.edu/lfw/

  original source: http://scikit-learn.org/stable/auto_examples/applications/face_recognition.html


C:\Users\parthi2929\Anaconda3\envs\py2\lib\site-packages\sklearn\cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
  "This module will be removed in 0.20.", DeprecationWarning)
C:\Users\parthi2929\Anaconda3\envs\py2\lib\site-packages\sklearn\grid_search.py:42: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. This module will be removed in 0.20.
  DeprecationWarning)
Total dataset size:
n_samples: 1288
n_features: 1850
n_classes: 7
Extracting the top 150 eigenfaces from 966 faces
C:\Users\parthi2929\Anaconda3\envs\py2\lib\site-packages\sklearn\utils\deprecation.py:58: DeprecationWarning: Class RandomizedPCA is deprecated; RandomizedPCA was deprecated in 0.18 and will be removed in 0.20. Use PCA(svd_solver='randomized') instead. The new implementation DOES NOT store whiten ``components_``. Apply transform to get them.
  warnings.warn(msg, category=DeprecationWarning)
done in 0.547s
Projecting the input data on the eigenfaces orthonormal basis
done in 0.016s
Fitting the classifier to the training set
done in 15.747s
Best estimator found by grid search:
SVC(C=1000.0, cache_size=200, class_weight='balanced', coef0=0.0,
  decision_function_shape='ovr', degree=3, gamma=0.001, kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)
Predicting the people names on the testing set
done in 0.047s
                   precision    recall  f1-score   support

     Ariel Sharon       0.56      0.69      0.62        13
     Colin Powell       0.80      0.87      0.83        60
  Donald Rumsfeld       0.73      0.70      0.72        27
    George W Bush       0.90      0.91      0.91       146
Gerhard Schroeder       0.88      0.84      0.86        25
      Hugo Chavez       0.91      0.67      0.77        15
       Tony Blair       0.88      0.81      0.84        36

      avg / total       0.85      0.85      0.85       322

[[  9   1   2   1   0   0   0]
 [  2  52   0   5   0   1   0]
 [  3   1  19   3   0   0   1]
 [  1   7   3 133   1   0   1]
 [  0   2   0   1  21   0   1]
 [  0   2   0   1   1  10   1]
 [  1   0   2   3   1   0  29]]
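
Side note: the deprecation warnings above already say where things moved. On a newer sklearn (0.18+), the equivalent imports would look roughly like below (a sketch only; this notebook actually runs the old py2 environment):

# modern equivalents of the deprecated pieces used above
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.decomposition import PCA

# RandomizedPCA -> PCA with the randomized SVD solver, as the warning suggests
pca = PCA(n_components=150, svd_solver='randomized', whiten=True)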

Explained Variance of Each PC

How much of the variance is explained by the first principal component? The second?

In [11]:
print pca.explained_variance_ratio_[0]
print pca.explained_variance_ratio_[1]
0.19346534
0.15116844
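
A quick follow-up of my own, reusing the fitted pca object above, to see how the explained variance accumulates across all 150 components:

import numpy as np

cumulative = np.cumsum(pca.explained_variance_ratio_)
print(cumulative[:5])   # running total over the first few PCs
print(cumulative[-1])   # fraction of the total variance kept by all 150 PCs
# number of PCs needed for 90% of the variance (assuming the 150 PCs do reach 90%)
print(np.argmax(cumulative >= 0.90) + 1)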

How Many PCs to Use?

We'll learn about the F1 score properly in the lesson on evaluation metrics, but you'll figure out for yourself whether a good classifier is characterized by a high or low F1 score. You'll do this by varying the number of principal components and watching how the F1 score changes in response.

As you add more principal components as features for training your classifier, do you expect it to get better or worse performance?

Modification:
Just change the line below to a larger or smaller number. I tried 200 and 100.

n_components = 150

For 150:
150.png

For 200: 200.png

For 100: 100.png

So it looks like it could go either way.

F1 Score vs. No. of PCs Used

Change n_components to the following values: [10, 15, 25, 50, 100, 250]. For each number of principal components, note the F1 score for Ariel Sharon. (For 10 PCs, the plotting functions in the code will break, but you should be able to see the F1 scores.) If you see a higher F1 score, does it mean the classifier is doing better, or worse?

I am just sharing my results from re-running the module above, as the code is big and not yet modularized (a rough sketch of how it could be modularized follows after the results).

f1_PCA_10 = 0.11
f1_PCA_15 = 0.33
f1_PCA_25 = 0.62
f1_PCA_50 = 0.67
f1_PCA_100 = 0.67
f1_PCA_250 = 0.67

Some more:
f1_PCA_400 = 0.69
f1_PCA_500 = 0.53

As you can see, after a point adding more principal components decreases the F1 score, so more PCs are not always better.
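
For reference, if one wanted to modularize the experiment instead of editing n_components by hand, a rough sketch of the loop could look like this. It assumes a modern sklearn (model_selection, PCA(svd_solver='randomized')) rather than the py2 setup above, and reuses X_train, X_test, y_train, y_test, param_grid and target_names from the eigenfaces cell:

from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import f1_score
from sklearn.svm import SVC

def f1_for_person(n_components, person='Ariel Sharon'):
    """Refit PCA + SVM with a given number of PCs and return that person's F1 score."""
    pca = PCA(n_components=n_components, svd_solver='randomized', whiten=True).fit(X_train)
    clf = GridSearchCV(SVC(kernel='rbf', class_weight='balanced'), param_grid)
    clf.fit(pca.transform(X_train), y_train)
    y_pred = clf.predict(pca.transform(X_test))
    label = list(target_names).index(person)
    # per-class F1 for just that label
    return f1_score(y_test, y_pred, labels=[label], average=None)[0]

for n in [10, 15, 25, 50, 100, 250]:
    print('n_components=%d  F1(Ariel Sharon)=%.2f' % (n, f1_for_person(n)))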